
    Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role

    The lateral geniculate nucleus (LGN) has often been treated as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain-imaging, and modeling studies that have in recent years built up a much more complex view of the LGN. These include effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as the involvement of the LGN in higher-level cognitive processing. Although recent studies have provided valuable insights into early visual processing, including the role of the LGN, a unified model of LGN responses to real-world objects has not yet been developed. In light of recent data, we suggest that the role of the LGN deserves more careful consideration in models of high-level visual processing.

    A multivariate comparison of electroencephalogram and functional magnetic resonance imaging to electrocorticogram using visual object representations in humans

    Today, most neurocognitive studies in humans employ the non-invasive neuroimaging techniques functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). However, how the data provided by fMRI and EEG relate exactly to the underlying neural activity remains incompletely understood. Here, we aimed to understand the relation between EEG and fMRI data at the level of neural population codes using multivariate pattern analysis. In particular, we assessed whether this relation is affected when we change stimuli or introduce identity-preserving variations to them. For this, we recorded EEG and fMRI data separately from 21 healthy participants while they viewed everyday objects in different viewing conditions, and then related the data to electrocorticogram (ECoG) data recorded for the same stimulus set from epileptic patients. The comparison of EEG and ECoG data showed that object category signals emerge swiftly in the visual system and can be detected by both EEG and ECoG at similar temporal delays after stimulus onset. The correlation between EEG and ECoG was reduced when object representations tolerant to changes in scale and orientation were considered. The comparison of fMRI and ECoG overall revealed a tighter relationship in occipital than in temporal regions, related to differences in fMRI signal-to-noise ratio. Together, our results reveal a complex relationship between fMRI, EEG, and ECoG signals at the level of population codes that critically depends on the time point after stimulus onset, the region investigated, and the visual contents used.
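    The population-code comparison described above is commonly carried out with representational dissimilarity matrices (RDMs): each modality's responses to the stimulus set are summarized as pairwise pattern dissimilarities, and the two RDMs are then rank-correlated. A minimal sketch with simulated data (the arrays and helper names are illustrative, not from the study):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of stimuli.
    patterns: (n_stimuli, n_channels) array."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles of two RDMs --
    a standard way to relate two measurement modalities at the
    level of population codes."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    # rank-transform, then Pearson (equivalent to Spearman when there
    # are no ties, as with continuous dissimilarities)
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((10, 64))                       # 10 stimuli x 64 channels
ecog = eeg[:, :32] + 0.5 * rng.standard_normal((10, 32))  # a noisy, related signal
print(rdm_similarity(rdm(eeg), rdm(ecog)))  # positive: shared geometry
```

    Identity-preserving variations can be probed the same way, by building RDMs from responses to transformed versions of the stimuli and asking how the cross-modality correlation changes.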

    Health economic analysis of the integrated cognitive assessment tool to aid dementia diagnosis in the United Kingdom

    Objectives: The aim of this study was to develop a comprehensive economic evaluation of the integrated cognitive assessment (ICA) tool compared with standard cognitive tests when used for dementia screening in primary care and for initial patient triage in memory clinics. Methods: ICA was compared with standard of care, comprising a mixture of cognitive assessment tools, over a lifetime horizon from the UK health and social care perspective. The model combined a decision tree, capturing the initial outcomes of cognitive testing, with a Markov structure estimating the long-term outcomes of people with dementia. Quality-of-life outcomes were quantified using quality-adjusted life years (QALYs), and economic benefits were assessed using net monetary benefit (NMB). Both costs and QALYs were discounted at 3.5% per annum, and cost-effectiveness was assessed against a threshold of £20,000 per QALY gained. Results: ICA dominated standard cognitive assessment tools in both the primary care and memory clinic settings. Introduction of the ICA tool was estimated to result in a lifetime cost saving of approximately £123 per person in primary care and £226 per person in memory clinics. QALY gains associated with early diagnosis were modest (0.0016 in primary care and 0.0027 in memory clinics). The NMB of ICA introduction was estimated at £154 in the primary care and £281 in the memory clinic setting. Conclusion: Introduction of ICA as a tool to screen primary care patients for dementia and to perform initial triage in memory clinics could be cost saving to the UK public health and social care payer.
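    The reported NMB figures follow directly from the standard formula NMB = λ × ΔQALY − ΔCost, where λ is the £20,000-per-QALY threshold and ΔCost is negative because ICA is cost saving. A quick check using the abstract's numbers (the function name is ours; small differences from the reported £154 and £281 reflect rounding of the published inputs):

```python
def net_monetary_benefit(delta_qaly, delta_cost, threshold=20_000):
    """NMB = willingness-to-pay threshold x QALY gain - incremental cost.
    A positive NMB means the intervention is cost-effective at the
    given threshold (here £20,000 per QALY, as in the abstract)."""
    return threshold * delta_qaly - delta_cost

# Figures reported in the abstract: ICA is cost saving (negative
# incremental cost) with modest QALY gains.
primary_care = net_monetary_benefit(0.0016, -123)   # ~ £155
memory_clinic = net_monetary_benefit(0.0027, -226)  # ~ £280
print(round(primary_care), round(memory_clinic))
```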

    A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanism of face processing has not been fully revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model accounts for several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches.
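    Hierarchical models of this kind alternate template matching with local max pooling; the pooling stage is what buys tolerance to small shifts and scale changes as information moves from posterior to anterior patches. A minimal illustration of that pooling operation (our sketch, not the paper's implementation):

```python
import numpy as np

def complex_pool(responses, pool=2):
    """C-layer max pooling: taking the maximum over a local
    neighbourhood yields tolerance to small shifts -- the basic
    operation a simple-to-complex hierarchy repeats at every stage."""
    h, w = responses.shape
    h2, w2 = h // pool, w // pool
    r = responses[: h2 * pool, : w2 * pool]
    return r.reshape(h2, pool, w2, pool).max(axis=(1, 3))

# A feature and a slightly shifted copy produce identical pooled maps,
# illustrating the shift tolerance that accumulates along the hierarchy.
a = np.zeros((8, 8)); a[2, 2] = 1.0
b = np.zeros((8, 8)); b[2, 3] = 1.0   # shifted by one pixel
print(np.array_equal(complex_pool(a), complex_pool(b)))  # True
```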

    How Can Selection of Biologically Inspired Features Improve the Performance of a Robust Object Recognition Model?

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, over several processing stages. Along these stages, a set of features of increasing complexity is extracted by different parts of the visual system: elementary features like bars and edges are processed at earlier levels of the visual pathway, and more complex features are extracted at progressively higher stages. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model to different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and play an important role in object recognition. These patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches and eventually reduce performance. In the proposed model, we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, in which it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition.
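    The patch-selection idea can be sketched as a small genetic algorithm over subsets of patches. In this toy version fitness is a given per-patch informativeness score; in the actual model it would be recognition performance on a validation set (all names and numbers below are illustrative, not the authors' implementation):

```python
import random

def evolve_patch_mask(scores, n_select, generations=200, pop_size=30, seed=0):
    """Toy genetic algorithm in the spirit of the abstract: evolve a
    subset of n_select patches maximizing a fitness function."""
    rng = random.Random(seed)
    n = len(scores)

    def random_mask():
        return frozenset(rng.sample(range(n), n_select))

    def fitness(mask):
        # Stand-in fitness: total informativeness of the chosen patches.
        return sum(scores[i] for i in mask)

    def mutate(mask):
        # Swap one selected patch for an unselected one.
        mask = set(mask)
        mask.remove(rng.choice(sorted(mask)))
        mask.add(rng.choice([i for i in range(n) if i not in mask]))
        return frozenset(mask)

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # keep the best half
        pop = elite + [mutate(rng.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3]  # hypothetical informativeness
best = evolve_patch_mask(scores, n_select=3)
print(sorted(best))  # converges on the three highest-scoring patches
```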

    A Stable Biologically Motivated Learning Mechanism for Visual Feature Extraction to Handle Facial Categorization

    The brain's mechanism for extracting visual features to recognize various objects has long been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) to extract informative intermediate-level visual features during the learning process, which also makes the model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To demonstrate the strength of the proposed visual feature learning mechanism, we show that when it is used in the training process of a well-known biologically motivated object recognition model (the HMAX model), it outperforms HMAX in face/non-face classification tasks. Furthermore, we demonstrate that the proposed mechanism follows performance trends similar to those of humans in a psychophysical face versus non-face rapid categorization task.
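    The stability-plasticity mechanism at the heart of ART can be illustrated in a few lines: an input either resonates with an existing prototype, which is then refined, or recruits a new category, so earlier learning is never overwritten. A simplified sketch (ours, not the paper's exact network):

```python
import numpy as np

def art_learn(inputs, vigilance=0.75, lr=0.5):
    """Minimal ART-like clustering (illustrative, not the paper's model).
    Each input is matched against stored prototypes; if the best match
    clears the vigilance threshold the winner is updated, otherwise a
    new category is created. Because poorly matching inputs open new
    categories instead of overwriting old ones, previously learned
    features stay stable while new ones can still be learned."""
    prototypes = []
    for x in inputs:
        x = np.asarray(x, dtype=float)
        if prototypes:
            # match = fraction of the input captured by each prototype
            matches = [np.minimum(x, p).sum() / x.sum() for p in prototypes]
            j = int(np.argmax(matches))
            if matches[j] >= vigilance:
                # resonance: refine the winning prototype (plasticity)
                prototypes[j] = (1 - lr) * prototypes[j] \
                    + lr * np.minimum(x, prototypes[j])
                continue
        # mismatch: recruit a new category (stability of old ones)
        prototypes.append(x.copy())
    return prototypes

faces = [[1, 1, 0, 0], [1, 0.9, 0.1, 0]]   # two similar "face" inputs
nonfaces = [[0, 0, 1, 1]]                  # one dissimilar input
protos = art_learn(faces + nonfaces)
print(len(protos))  # 2: one face-like and one non-face-like category
```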

    Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation

    Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better-performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better-performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights, and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects, yielded a representation that fully explained our IT data. Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT.
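    The final "remixing" step, adding linear combinations that separate the behaviorally important categories, can be sketched as training a linear readout and appending its decision value as an extra feature dimension. Here a simple logistic readout stands in for the paper's max-margin discriminants, and all data are synthetic:

```python
import numpy as np

def append_category_discriminant(features, labels, epochs=100, lr=0.1):
    """Train a linear (logistic) readout separating two categories,
    e.g. animate vs inanimate, by gradient descent on the log-loss,
    then append its decision value as an extra feature dimension --
    a simplified stand-in for the paper's max-margin combinations."""
    X = np.asarray(features, float)
    y = np.asarray(labels, float)              # 0/1 category labels
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # logistic prediction
        g = p - y                                # log-loss gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    decision = X @ w + b
    return np.column_stack([X, decision]), (w, b)

rng = np.random.default_rng(1)
animate = rng.normal(1.0, 0.5, (20, 8))    # hypothetical deep features
inanimate = rng.normal(-1.0, 0.5, (20, 8))
X = np.vstack([animate, inanimate])
y = np.array([1] * 20 + [0] * 20)
X_aug, (w, b) = append_category_discriminant(X, y)
acc = (((X @ w + b) > 0).astype(int) == y).mean()
print(acc)  # the readout cleanly separates the two categories
```

    Adding such a discriminant as a weighted feature sharpens the categorical clustering of the representation, which is the property the abstract reports as bringing the model representation into full agreement with the IT data.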